Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site.
- 
An ideal traffic simulator replicates the realistic long-term point-to-point trips that a self-driving system experiences during deployment. Prior models and benchmarks focus on closed-loop motion simulation of the agents initially present in a scene, which is problematic for long-term simulation because agents enter and exit the scene as the ego vehicle reaches new regions. We propose InfGen, a unified next-token prediction model that performs interleaved closed-loop motion simulation and scene generation, automatically switching between the two modes to enable stable long-term rollouts. InfGen performs at the state of the art in short-term (9s) traffic simulation and significantly outperforms all other methods in long-term (30s) simulation. (An illustrative rollout sketch for this entry appears after the list.)
Free, publicly-accessible full text available August 5, 2026.
- 
Abstract: Salt marshes sequester a disproportionately large amount of carbon dioxide (CO2) from the atmosphere through high rates of photosynthesis and carbon burial. Climate change could potentially alter this carbon sink, particularly the response of vegetation to environmental stressors that can decrease photosynthesis. Midday depression of gross primary production (GPP), characterized by a decline in photosynthesis during midday, has been documented in multiple ecosystems as a response to drought, high temperatures, and other stressors linked to climate change. Yet, midday depression has not been thoroughly investigated in salt marsh ecosystems. Here, we show that the midday depression of GPP in a Spartina alterniflora salt marsh on the Eastern Shore of Virginia was ubiquitous and occurred on 76% of the 283 days studied during the 2019–2022 growing seasons. GPP was estimated from eddy covariance measurements with flux partitioning. Using random forest, we found that the daily maximum tidal height and air temperature were the strongest predictors of midday depression of GPP, with lower high tides and warmer temperatures associated with more severe depression. This result suggests midday depression occurs when GPP decreases in the afternoon in response to salinity and water stress. To our knowledge, this is the first examination of midday depression of photosynthesis in S. alterniflora at the ecosystem scale. Our results highlight the potential of climate change to increase midday depression of photosynthesis and ultimately weaken the salt marsh carbon sink. (An illustrative analysis sketch for this entry appears after the list.)
- 
While image captioning provides isolated descriptions for individual images, and video captioning offers a single narrative for an entire video clip, our work explores an important middle ground: progress-aware video captioning at the frame level. This novel task aims to generate temporally fine-grained captions that not only accurately describe each frame but also capture the subtle progression of actions throughout a video sequence. Despite the strong capabilities of existing leading vision-language models, they often struggle to discern the nuances of frame-wise differences. To address this, we propose ProgressCaptioner, a captioning model designed to capture the fine-grained temporal dynamics within an action sequence. Alongside, we develop the FrameCap dataset to support training and the FrameCapEval benchmark to assess caption quality. The results demonstrate that ProgressCaptioner significantly surpasses leading captioning models, producing precise captions that accurately capture action progression and setting a new standard for temporal precision in video captioning. Finally, we showcase practical applications of our approach, specifically in aiding keyframe selection and advancing video understanding, highlighting its broad utility. (An illustrative frame-wise captioning sketch for this entry appears after the list.)
Free, publicly-accessible full text available March 26, 2026.
- 
Trust is crucial for ensuring the safety, security, and widespread adoption of automated vehicles (AVs); if trust is lacking, drivers and the general public may hesitate to embrace this technology. This research investigates contextualized trust profiles in order to create personalized experiences for drivers in AVs with varying levels of reliability. A driving simulator experiment involving 70 participants revealed three distinct contextualized trust profiles (confident copilots, myopic pragmatists, and reluctant automators), identified through K-means clustering and analyzed in relation to drivers' dynamic trust, dispositional trust, initial learned trust, personality traits, and emotions. The experiment encompassed eight scenarios in which participants were asked to take over control from the AV under three conditions: a control condition, a false alarm condition, and a miss condition. To validate the models, a multinomial logistic regression model was constructed using a Shapley additive explanations (SHAP) explainer to determine the most influential features in predicting contextualized trust profiles, achieving an F1-score of 0.90 and an accuracy of 0.89. In addition, an examination of how individual factors impact contextualized trust profiles provided valuable insights into trust dynamics from a user-centric perspective. The outcomes of this research hold significant implications for the development of personalized in-vehicle trust monitoring and calibration systems that modulate drivers' trust levels, thereby enhancing safety and user experience in automated driving. (An illustrative modeling sketch for this entry appears after the list.)
Free, publicly-accessible full text available December 1, 2025.
- 
Current PEFT methods for LLMs can achieve either high quality, efficient training, or scalable serving, but not all three simultaneously. To address this limitation, we investigate sparse fine-tuning and observe a remarkable improvement in generalization ability. Utilizing this key insight, we propose a family of Structured Sparse Fine-Tuning methods for LLMs, which concurrently achieve state-of-the-art fine-tuning performance, training efficiency, and inference scalability. Our method accomplishes this by "selecting sparsely and computing densely": it selects a few heads and channels in the MHA and FFN modules of each Transformer block, respectively. Next, it co-permutes the weight matrices on both sides of the coupled structures in LLMs to connect the selected components in each layer into a dense submatrix. Finally, it performs in-place gradient updates on all submatrices. Through theoretical analysis and empirical results, our method prevents overfitting and forgetting, delivers state-of-the-art performance on both commonsense and arithmetic reasoning with 4.6% and 1.3% average improvements over LoRA, and surpasses full fine-tuning by 11.5% when generalizing to various domains after instruction tuning. Using our partial back-propagation algorithm, our method reduces training memory by up to 3× and improves latency by 1.5–2.7× compared to full fine-tuning, while delivering an average 10% improvement over LoRA on both metrics. We further demonstrate that the weight updates in our method can be decoupled into adapters, enabling effective fusion, fast switching, and efficient parallelism for serving multiple fine-tuned models. (An illustrative partial-update sketch for this entry appears after the list.)
Free, publicly-accessible full text available December 10, 2025.
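For the InfGen entry above, here is a minimal sketch of what an interleaved long-horizon rollout loop could look like. The method names (motion_step, should_regenerate, spawn_or_remove_agents) and the mode-switching criterion are illustrative assumptions, not InfGen's actual interface or tokenization.

```python
# Hypothetical sketch of an interleaved long-horizon rollout loop.
# All method names and the switching criterion are assumptions for
# illustration; they are not InfGen's published API.

def rollout(scene, ego, model, horizon_s=30.0, dt=0.1):
    """Alternate closed-loop motion simulation with scene (agent) generation."""
    t = 0.0
    while t < horizon_s:
        # Motion mode: advance every currently active agent one step,
        # conditioned on the scene history so far (closed loop).
        scene = model.motion_step(scene, ego, dt)

        # Scene-generation mode: as the ego vehicle reaches new regions,
        # inject newly visible agents and retire agents that have left.
        if model.should_regenerate(scene, ego):
            scene = model.spawn_or_remove_agents(scene, ego)

        t += dt
    return scene
```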
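For the salt marsh entry above, this is a minimal sketch (not the authors' code) of ranking drivers of midday GPP depression with a random forest. The file name, column names, and the severity index are hypothetical stand-ins for the partitioned eddy covariance data described in the abstract.

```python
# Hypothetical sketch: rank predictors of midday GPP depression with a
# random forest. File and column names are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("marsh_daily_summary.csv")        # hypothetical daily summary
X = df[["max_tidal_height_m", "max_air_temp_c"]]   # assumed predictor columns
y = df["midday_depression_index"]                  # assumed severity index from partitioned GPP

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, y)

# Feature importances indicate which predictors best explain the depression;
# the study reports daily maximum tidal height and air temperature as strongest.
for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.2f}")
```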
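For the progress-aware captioning entry above, here is an illustrative frame-wise captioning loop. The caption_frame callable stands in for any vision-language model, and conditioning each caption on the earlier ones is an assumption used to convey the task, not ProgressCaptioner's published implementation.

```python
# Illustrative only: generate one caption per frame, conditioned on the
# captions produced so far, so successive captions describe how the action
# progresses. `caption_frame` is a placeholder for a vision-language model call.

def caption_video(frames, caption_frame):
    captions = []
    for frame in frames:
        history = "\n".join(f"Frame {j}: {c}" for j, c in enumerate(captions))
        prompt = (
            "Describe this frame, focusing on how the action has advanced "
            "relative to the earlier frames.\n" + history
        )
        captions.append(caption_frame(frame, prompt))
    return captions
```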
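For the trust-profile entry above, this is a sketch of the general analysis pipeline the abstract describes (K-means clustering of trust responses, then a multinomial logistic regression examined with SHAP), using synthetic data and assumed feature layouts rather than the study's dataset.

```python
# Sketch with synthetic data (not the study's dataset): cluster per-scenario
# trust ratings into profiles, predict the profile from individual-difference
# measures, and inspect feature influence with SHAP.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
trust_ratings = rng.random((70, 8))   # 70 drivers x 8 takeover scenarios (assumed layout)
predictors = rng.random((70, 6))      # e.g. dispositional trust, personality, emotion scores

# Three contextualized trust profiles via K-means, as in the abstract.
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trust_ratings)

X_tr, X_te, y_tr, y_te = train_test_split(predictors, profiles, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # multinomial for 3 classes
pred = clf.predict(X_te)
print("F1:", f1_score(y_te, pred, average="weighted"),
      "accuracy:", accuracy_score(y_te, pred))

# Model-agnostic SHAP explainer over the predicted class probabilities.
explainer = shap.Explainer(clf.predict_proba, X_tr)
shap_values = explainer(X_te)
```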
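For the structured sparse fine-tuning entry above, the following is a conceptual sketch of the "update only a few structured components" idea: gradients are masked so that only rows belonging to selected attention heads or FFN channels receive updates. It illustrates partial updates only; the paper's co-permutation into dense submatrices and the resulting memory and latency savings are not reproduced here, and all shapes and selections are assumptions.

```python
# Conceptual sketch (assumptions throughout, not the paper's implementation):
# fine-tune only a structured subset of a linear layer's output rows, e.g. the
# rows of a value projection that belong to a few attention heads, or a few
# FFN up-projection channels. Everything else stays effectively frozen.
import torch
import torch.nn as nn

def train_only_rows(linear: nn.Linear, row_ids: torch.Tensor) -> None:
    """Zero the gradient of every output row except the selected ones, so the
    optimizer effectively updates only a structured submatrix in place."""
    mask = torch.zeros(linear.out_features, 1)
    mask[row_ids] = 1.0
    linear.weight.register_hook(lambda g: g * mask)
    if linear.bias is not None:
        linear.bias.register_hook(lambda g: g * mask.squeeze(1))

# Illustrative shapes: 16 heads of dim 64, FFN hidden size 4096.
head_dim, n_heads, d_model, d_ffn = 64, 16, 1024, 4096
v_proj = nn.Linear(d_model, head_dim * n_heads)   # output rows grouped by head
up_proj = nn.Linear(d_model, d_ffn)               # output rows are FFN channels

selected_heads = [2, 7]                           # a hypothetical selection of heads
head_rows = torch.cat([torch.arange(h * head_dim, (h + 1) * head_dim)
                       for h in selected_heads])
train_only_rows(v_proj, head_rows)
train_only_rows(up_proj, torch.randperm(d_ffn)[:128])  # 128 of 4096 channels
```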